
    Image-Based Respiratory Motion Extraction and Respiration-Correlated Cone Beam CT (4D-CBCT) Reconstruction

    Accounting for respiratory motion during imaging helps improve targeting precision in radiation therapy. Respiratory motion can be a major source of error in determining the position of thoracic and upper abdominal tumor targets during radiotherapy, so extracting respiratory motion is a key task in radiation therapy planning. Respiration-correlated or four-dimensional CT (4DCT) imaging techniques have recently been integrated into imaging systems for verifying tumor position during treatment and managing respiration-induced tissue motion. The quality of the 4D reconstructed volumes is strongly affected by the extracted respiratory signal and the phase sorting method used. This thesis is divided into two parts. In the first part, two image-based respiratory signal extraction methods are proposed and evaluated. These methods extract the respiratory signal from CBCT images without external sources, implanted markers, or dependence on any particular structure in the images such as the diaphragm. The first method, called Local Intensity Feature Tracking (LIFT), extracts the respiratory signal from feature points detected and tracked through the sequence of projections. The second method, called Intensity Flow Dimensionality Reduction (IFDR), detects the respiratory signal by computing the optical flow of every pixel between each pair of adjacent projections; the motion variance in the resulting optical flow dataset is then extracted using linear and non-linear dimensionality reduction techniques to represent the respiratory signal. Experiments conducted on clinical datasets showed that both proposed methods successfully extracted the respiratory signal and that it correlates well with standard respiratory signals such as the diaphragm position and the internal markers' signal. In the second part of this thesis, 4D-CBCT reconstruction based on different phase sorting techniques is studied. The quality of the 4D reconstructed images is evaluated and compared across phase sorting methods based on internal markers, external markers, and the image-based methods (LIFT and IFDR). In addition, a method for generating additional projections for 4D-CBCT reconstruction is proposed to reduce the artifacts that arise when reconstructing from an insufficient number of projections. Experimental results demonstrated the feasibility of the proposed method in recovering edges and reducing streak artifacts.
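    As a rough illustration of the IFDR idea (dense optical flow between adjacent projections followed by dimensionality reduction), the sketch below uses OpenCV's Farneback flow and a single PCA component as the respiratory surrogate; the projection array, flow parameters, and choice of PCA are assumptions, not the thesis implementation.

```python
# Minimal IFDR-style sketch (not the authors' code): dense optical flow
# between adjacent CBCT projections, then PCA on the flattened flow fields
# to recover a 1-D respiratory surrogate signal.
import numpy as np
import cv2
from sklearn.decomposition import PCA

def extract_respiratory_signal(projections):
    """projections: (N, H, W) uint8 array of consecutive CBCT projections."""
    flows = []
    for prev, curr in zip(projections[:-1], projections[1:]):
        # Farneback dense optical flow between each pair of adjacent projections
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        # Keep the vertical (superior-inferior) flow component, which is the one
        # dominated by respiration, and flatten it into one feature vector per pair.
        flows.append(flow[..., 1].ravel())
    flows = np.asarray(flows)

    # The first principal component of the flow dataset serves as the surrogate signal.
    return PCA(n_components=1).fit_transform(flows).ravel()

# Example usage with synthetic data:
# projections = (np.random.rand(200, 256, 256) * 255).astype(np.uint8)
# resp = extract_respiratory_signal(projections)
```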

    An IoT Machine Learning-Based Mobile Sensors Unit for Visually Impaired People

    Visually impaired people face many challenges that limit their ability to perform daily tasks and interact with the surrounding world. Navigating from place to place is one of the biggest challenges facing visually impaired people, especially those with complete loss of vision. As the Internet of Things (IoT) concept begins to play a major role in smart city applications, visually impaired people stand to be among its beneficiaries. In this paper, we propose a smart IoT-based mobile sensors unit that can be attached to an off-the-shelf cane, hereafter called a smart cane, to facilitate independent movement for visually impaired people. The proposed mobile sensors unit consists of a six-axis accelerometer/gyroscope, ultrasonic sensors, a GPS sensor, cameras, a digital motion processor, and a credit-card-sized single-board microcomputer. The unit collects information about the cane user and the surrounding obstacles while on the move. An embedded machine learning algorithm, developed and stored in the microcomputer's memory, identifies detected obstacles and alerts the user to their nature. In addition, in case of emergencies such as a cane fall, the unit alerts the cane user and their guardian. Moreover, a mobile application is developed for the guardian to track the cane user via Google Maps on a mobile handset to ensure safety. To validate the system, a prototype was developed and tested.
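    A minimal sketch of the kind of sensing loop such a unit runs is given below; the sensor reads, obstacle classifier, thresholds, and alerting path are hypothetical stand-ins rather than the unit's actual firmware.

```python
# Illustrative smart-cane sensing loop. All sensor functions are stand-ins;
# on real hardware they would read the ultrasonic rangers, IMU, and camera.
import random
import time

OBSTACLE_DISTANCE_CM = 100.0   # assumed alert threshold
FALL_TILT_DEGREES = 60.0       # assumed tilt angle indicating a cane fall

def read_ultrasonic_cm():
    # Stand-in for an ultrasonic range reading (hardware-specific in practice).
    return random.uniform(20.0, 300.0)

def read_tilt_degrees():
    # Stand-in for a tilt angle derived from the accelerometer/gyroscope.
    return random.uniform(0.0, 90.0)

def classify_obstacle():
    # Stand-in for the embedded ML classifier running on the microcomputer.
    return random.choice(["person", "vehicle", "stairs", "pole"])

def notify(message):
    # On the real unit this would be audio/haptic feedback or a guardian-app alert.
    print(message)

def sensing_loop(iterations=10):
    for _ in range(iterations):
        distance = read_ultrasonic_cm()
        if distance < OBSTACLE_DISTANCE_CM:
            notify(f"Obstacle ahead ({distance:.0f} cm): {classify_obstacle()}")
        if read_tilt_degrees() > FALL_TILT_DEGREES:
            notify("Possible cane fall detected; alerting user and guardian")
        time.sleep(0.1)

if __name__ == "__main__":
    sensing_loop()
```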

    Microwave Imaging for Early Breast Cancer Detection: Current State, Challenges, and Future Directions

    Breast cancer is the most commonly diagnosed cancer type and is the leading cause of cancer-related death among females worldwide. Breast screening and early detection are currently the most successful approaches for the management and treatment of this disease. Several imaging modalities are currently utilized for detecting breast cancer, of which microwave imaging (MWI) is gaining considerable attention as a promising diagnostic tool for early breast cancer detection. MWI is a noninvasive, relatively inexpensive, fast, convenient, and safe screening tool. The purpose of this paper is to provide an up-to-date survey of the principles, developments, and current research status of MWI for breast cancer detection. The paper is structured into two sections: the first is an overview of current MWI techniques used for detecting breast cancer, followed by an explanation of the working principle behind MWI and its two main types, namely, microwave tomography and radar-based imaging. The second section reviews the initial experiments along with more recent studies on the use of MWI for breast cancer detection. Furthermore, the paper summarizes the challenges facing MWI as a breast cancer detection tool and provides future research directions. On the whole, MWI has proven its potential as a screening tool for breast cancer detection, both as a standalone and as a complementary technique. However, a few challenges need to be addressed to unlock the full potential of this imaging modality and translate it to clinical settings.
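    Radar-based MWI systems typically form images with a confocal delay-and-sum beamformer; the generic sketch below illustrates that idea with assumed antenna positions, sampling rate, and in-breast propagation speed, and is not tied to any particular system covered in the review.

```python
# Generic delay-and-sum (confocal) beamformer sketch for radar-based MWI.
# Antenna layout, sampling rate, and propagation speed are illustrative assumptions.
import numpy as np

def delay_and_sum(signals, antennas, pixels, fs, v):
    """
    signals  : (A, T) backscattered time traces, one per antenna
    antennas : (A, 2) antenna positions in metres
    pixels   : (P, 2) image point positions in metres
    fs       : sampling rate in Hz
    v        : assumed propagation speed inside the breast in m/s
    Returns a (P,) intensity value per image point.
    """
    image = np.zeros(len(pixels))
    for p, r in enumerate(pixels):
        total = 0.0
        for a, pos in enumerate(antennas):
            # Round-trip delay from antenna to the candidate scatterer and back
            delay = 2.0 * np.linalg.norm(r - pos) / v
            idx = int(round(delay * fs))
            if idx < signals.shape[1]:
                total += signals[a, idx]
        image[p] = total ** 2   # coherent sum, squared for energy
    return image
```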

    Exogenous Contrast Agents in Photoacoustic Imaging: An In Vivo Review for Tumor Imaging

    The field of cancer theranostics has grown rapidly in the past decade, and innovative ‘biosmart’ theranostic materials are being synthesized and studied to combat the rapid growth of cancer metastases. While current state-of-the-art oncology imaging techniques have decreased mortality rates, patients still face a diminished quality of life due to treatment. Therefore, improved diagnostics are needed to define in vivo tumor growth at the molecular level, enabling image-guided therapies and tailored dosing. This review summarizes in vivo studies that utilize contrast agents within the field of photoacoustic imaging, a relatively new imaging modality, for tumor detection, with a special focus on imaging and transducer parameters. The paper also details the different types of contrast agents used in this novel diagnostic field, i.e., organic-based, metal/inorganic-based, and dye-based contrast agents. We conclude this review by discussing the challenges and future directions of photoacoustic imaging.

    Fluoroscopic 3D Image Generation from Patient-Specific PCA Motion Models Derived from 4D-CBCT Patient Datasets: A Feasibility Study

    A method for generating fluoroscopic (time-varying) volumetric images using patient-specific motion models derived from four-dimensional cone-beam CT (4D-CBCT) images was developed. 4D-CBCT images acquired immediately prior to treatment have the potential to accurately represent patient anatomy and respiration during treatment. Fluoroscopic 3D image estimation is performed in two steps: (1) deriving motion models and (2) optimization. To derive the motion models, every phase in a 4D-CBCT set is registered to a reference phase chosen from the same set using deformable image registration (DIR). Principal component analysis (PCA) is then used to reduce the dimensionality of the displacement vector fields (DVFs) resulting from DIR into a few vectors representing the organ motion captured in the DVFs. The PCA motion model is optimized iteratively by comparing a measured cone-beam CT (CBCT) projection to a simulated projection computed from the motion model and a reference 4D-CBCT phase, resulting in a sequence of fluoroscopic 3D images. Patient datasets were used to evaluate the method by comparing the tumor location estimated in the generated images against manually defined ground-truth positions. Experimental results showed that the mean absolute error (MAE) of the tumor position along the superior–inferior (SI) direction and its 95th percentile were 2.29 mm and 5.79 mm for patient 1, and 1.89 mm and 4.82 mm for patient 2. This study demonstrated the feasibility of deriving 4D-CBCT-based PCA motion models that have the potential to account for 3D non-rigid patient motion and to localize tumors and other anatomical structures on the day of treatment.
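    The motion-model step (PCA over DVFs, then regenerating a DVF from a few coefficients) can be sketched as follows; the DVF array shape, number of components, and use of scikit-learn are assumptions, and the DIR and projection-matching optimization are not shown.

```python
# Sketch of building a PCA motion model from deformation vector fields (DVFs)
# and regenerating a DVF from a small set of coefficients (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

def build_motion_model(dvfs, n_components=3):
    """dvfs: (n_phases, X, Y, Z, 3) displacement fields, one per 4D-CBCT phase."""
    n_phases = dvfs.shape[0]
    flat = dvfs.reshape(n_phases, -1)          # one row per respiratory phase
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(flat)           # per-phase motion-model coefficients
    return pca, coeffs

def synthesize_dvf(pca, coeffs, shape):
    """Reconstruct a DVF for arbitrary coefficients (e.g., coefficients found by
    optimizing against a measured CBCT projection)."""
    return pca.inverse_transform(coeffs.reshape(1, -1)).reshape(shape)

# Example with toy data:
# dvfs = np.random.randn(10, 32, 32, 32, 3)
# pca, coeffs = build_motion_model(dvfs)
# dvf_new = synthesize_dvf(pca, coeffs[0], dvfs.shape[1:])
```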

    Regression Analysis between the Different Breast Dose Quantities Reported in Digital Mammography and Patient Age, Breast Thickness, and Acquisition Parameters

    Breast cancer is the leading cause of cancer death among women worldwide. Screening mammography is considered the primary imaging modality for the early detection of breast cancer. The radiation dose from mammography, however, increases the patient's risk of radiation-induced cancer. The mean glandular dose (MGD), also called the average glandular dose (AGD), provides an estimate of the radiation dose absorbed by the glandular tissue of the breast. In this paper, the MGD is estimated for the craniocaudal (CC) and mediolateral oblique (MLO) views using the entrance skin dose (ESD), X-ray spectrum information, patient age, breast glandularity, and breast thickness. Moreover, a regression analysis is performed to evaluate the impact of the mammography acquisition parameters, age, and breast thickness on the estimated MGD and on the other machine-reported dose quantities, namely, the ESD and organ dose (OD). Furthermore, a correlation study is conducted to evaluate the correlation between the ESD and OD and the estimated MGD per image view. This retrospective study was applied to a dataset of 2035 mammograms corresponding to a cohort of 486 subjects with an age range of 28–86 years who underwent screening mammography examinations. Linear regression metrics were calculated to evaluate the strength of the correlations. The mean (and range) MGD was 0.832 (0.110–3.491) mGy for the CC view and 0.995 (0.256–2.949) mGy for the MLO view. All the mammography dose quantities strongly correlated with tube exposure (mAs): ESD (R² = 0.938 for the CC view and R² = 0.945 for the MLO view), OD (R² = 0.969 for the CC view and R² = 0.983 for the MLO view), and MGD (R² = 0.980 for the CC view and R² = 0.972 for the MLO view). Breast thickness correlated better with all the mammography dose quantities than patient age, which showed a poor correlation. Moreover, a strong correlation was found between the calculated MGD and both the ESD (R² = 0.929 for the CC view and R² = 0.914 for the MLO view) and the OD (R² = 0.971 for the CC view and R² = 0.972 for the MLO view). Furthermore, the MLO views were found to yield a slightly higher dose than the CC views, and the glandular absorbed dose was found to depend more on breast glandularity than on breast size. Despite being more reflective of the dose absorbed by the glandular tissue than the OD and ESD, the MGD is labor-intensive and time-consuming to estimate.
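    For illustration, MGD estimates of this kind typically follow a Dance-style formulation, MGD = K · g · c · s, in which the incident air kerma is scaled by tabulated conversion factors for glandularity, spectrum, and thickness; the sketch below pairs a placeholder version of that formula with a simple dose-versus-mAs regression. The factor values and example numbers are hypothetical, not data from this study.

```python
# Placeholder sketch of a Dance-style MGD estimate (MGD = K * g * c * s) and a
# simple linear regression of dose against tube loading (mAs). The conversion
# factors and the example data below are illustrative, not tabulated values.
import numpy as np
from sklearn.linear_model import LinearRegression

def estimate_mgd(entrance_kerma_mgy, g=0.4, c=1.0, s=1.0):
    """In practice g, c, and s are looked up from published tables using the
    beam HVL, breast thickness, glandularity, and target/filter combination."""
    return entrance_kerma_mgy * g * c * s

# Hypothetical per-exposure data: tube loading (mAs) and estimated MGD (mGy)
mas = np.array([[40.0], [55.0], [70.0], [90.0], [120.0]])
mgd = np.array([0.45, 0.62, 0.80, 1.01, 1.34])

model = LinearRegression().fit(mas, mgd)
print(f"slope = {model.coef_[0]:.4f} mGy/mAs, R^2 = {model.score(mas, mgd):.3f}")
```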

    Scale-invariant optical flow in tracking using a pan-tilt-zoom camera


    An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge

    Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, because of this manual coding, the results are often stale by the time they reach the ecologists. Combining the Internet of Things (IoT) with deep learning addresses both problems, as the images can be classified automatically and the results made immediately available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app over the LoRaWAN low-power wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal-sighting data from the camera traps in the field. A camera trap dataset of 66,400 images was used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal–Wallis: accuracy H(5) = 22.34; F1-score p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both the TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18. Upon stress testing by processing 1000 images consecutively, the Jetson Nano running a TensorRT model outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with a roughly ten times higher latency of 2.83 s/image (s.d. = 0.036). The Jetson Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds are below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results could potentially be improved by using object detection to reduce class imbalance and by exploring semi-supervised learning.
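    The train-then-deploy path described above (fine-tune a pretrained classifier, then convert it for an edge device) can be sketched roughly as follows; the dataset path, image size, class count, and training schedule are assumptions, not the paper's actual configuration.

```python
# Rough sketch: fine-tune a pretrained Xception classifier on camera-trap images,
# then convert it to TensorFlow Lite for edge deployment. Paths, image size,
# class count, and training settings are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 10          # assumed number of animal classes
IMG_SIZE = (299, 299)     # Xception's default input size

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False    # start by training only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "camera_trap_images/", image_size=IMG_SIZE, batch_size=32)
# model.fit(train_ds, epochs=5)

# Convert the trained model for an edge device (e.g., Raspberry Pi).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("camera_trap_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```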